Attacking neural machine translation models is an inherently combinatorial task on discrete sequences, solved with approximate heuristics. Most methods use the gradient to attack the model on each sample independently. Instead of mechanically applying the gradient, can we learn to produce meaningful adversarial attacks? In contrast with existing methods, we learn to attack a model by training an adversarial generator based on a language model. We propose the Masked Adversarial Generation (MAG) model, which learns to perturb the translation model throughout the training process. Experiments show that it improves the robustness of machine translation models while being faster than competing methods.
Translated by Google Translate
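The abstract above frames the attack as a combinatorial search over masked substitutions. A minimal sketch of that search space, using a brute-force greedy loop over a toy vocabulary with fake random embeddings (MAG itself trains a language-model-based generator instead; the "victim loss" here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the masked-substitution search space (NOT the MAG
# method, which trains an LM-based adversarial generator instead of
# brute-force search).  Embeddings and the victim loss are fabricated.
vocab = ["the", "a", "cat", "dog", "sat", "ran"]
emb = {w: rng.standard_normal(8) for w in vocab}  # fake embeddings

def victim_loss(tokens, reference):
    """Pretend translation loss: distance between mean embeddings."""
    enc = np.mean([emb[t] for t in tokens], axis=0)
    return float(np.linalg.norm(enc - reference))

src = ["the", "cat", "sat"]
ref = np.mean([emb[t] for t in src], axis=0)  # loss is 0 on the clean input

best = list(src)
for i in range(len(src)):           # mask one position at a time
    for cand in vocab:              # try each candidate substitution
        trial = list(best)
        trial[i] = cand
        if victim_loss(trial, ref) > victim_loss(best, ref):
            best = trial            # keep the swap that raises the loss most
print(best, victim_loss(best, ref))
```

Each position requires a pass over the whole vocabulary, which is why gradient or learned-generator approaches are needed at realistic scale.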
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements and comes with non-asymptotic bounds, convergence results and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special cases. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao-Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
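To make the object PaRIS approximates concrete, here is a minimal bootstrap particle filter that tracks a smoothed additive functional online. It uses the naive genealogy-tracing ("poor man's") smoother, whose path degeneracy is precisely what PaRIS's backward sampling fixes; the linear-Gaussian model and all parameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian state-space model (not from the paper):
#   x_t = phi * x_{t-1} + sigma_x * eps_t,   y_t = x_t + sigma_y * eta_t
phi, sigma_x, sigma_y = 0.9, 1.0, 0.5
T, N = 100, 500  # time steps, particles

# Simulate a trajectory and observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma_x * rng.standard_normal()
y = x + sigma_y * rng.standard_normal(T)

# Bootstrap particle filter with online smoothing of the additive
# functional S_T = sum_t x_t, carried along particle genealogies.
# PaRIS replaces the ancestral inheritance of tau below with backward
# sampling to avoid path degeneracy.
particles = rng.standard_normal(N)
tau = particles.copy()              # running additive statistic per particle
for t in range(1, T):
    # Propagate through the transition kernel
    particles = phi * particles + sigma_x * rng.standard_normal(N)
    tau = tau + particles           # h(x_{t-1}, x_t) = x_t
    # Weight by the observation likelihood, then resample
    logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    particles, tau = particles[idx], tau[idx]

smoothed_sum = tau.mean()           # estimate of E[sum_t x_t | y_{0:T}]
print(smoothed_sum)
```

The self-normalised weights `w` are the source of the bias the abstract refers to; PPG's conditional SMC moves reduce it at the cost of extra sweeps.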
The following article presents a memetic algorithm applying deep reinforcement learning (DRL) for solving practically oriented dual resource constrained flexible job shop scheduling problems (DRC-FJSSP). In recent years, there has been extensive research on DRL techniques, but without considering realistic, flexible and human-centered shop floors. A research gap can be identified in the context of make-to-order oriented discontinuous manufacturing, as it is often found in medium-sized companies with high service levels. From practical industry projects in this domain, we recognize requirements to depict flexible machines, human workers and capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill-of-material (BOM) manufacturing, sequence-dependent setup times and (partially) automated tasks. On the other hand, intensive research has been done on metaheuristics in the context of DRC-FJSSP. However, there is a lack of suitable and generic scheduling methods that can be holistically applied in sociotechnical production and assembly processes. In this paper, we first formulate an extended DRC-FJSSP induced by the practical requirements mentioned. Then we present our proposed hybrid framework with parallel computing for multicriteria optimization. Through numerical experiments with real-world data, we confirm that the framework generates feasible schedules efficiently and reliably. Utilizing DRL instead of random operations leads to better results and outperforms traditional approaches.
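For readers unfamiliar with the term, a memetic algorithm is a genetic algorithm whose offspring are refined by local search. A toy version on single-machine total-tardiness sequencing, a drastically simplified stand-in for the DRC-FJSSP above (job data and all design choices are invented for illustration):

```python
import random

# Toy memetic algorithm (GA + local search) on a single-machine
# sequencing problem; an illustrative stand-in for the far richer
# DRC-FJSSP of the paper, not its actual method.
random.seed(1)
proc = [4, 2, 7, 3, 5, 1]       # processing times (invented)
due = [6, 4, 20, 10, 12, 3]     # due dates (invented)

def tardiness(order):
    """Total tardiness of a job permutation."""
    t = total = 0
    for j in order:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def local_search(order):
    """First-improvement pairwise swaps: the 'memetic' refinement step."""
    best = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for k in range(i + 1, len(best)):
                cand = list(best)
                cand[i], cand[k] = cand[k], cand[i]
                if tardiness(cand) < tardiness(best):
                    best, improved = cand, True
    return best

# Evolve a small population of permutations
pop = [random.sample(range(6), 6) for _ in range(10)]
for _ in range(20):
    pop.sort(key=tardiness)
    child = list(pop[0])                      # clone the elite parent
    i, k = random.sample(range(6), 2)
    child[i], child[k] = child[k], child[i]   # mutation
    pop[-1] = local_search(child)             # memetic refinement
best = min(pop, key=tardiness)
print(best, tardiness(best))
```

The paper's contribution is, roughly, to replace the random mutation/operator choices in such a loop with DRL-guided decisions over a much richer scheduling state.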
Current large language models can perform reasonably well on complex tasks that require step-by-step reasoning with few-shot learning. Are these models applying reasoning skills they have learnt during pre-training, reasoning outside of their training context, or are they simply memorizing their training corpus at finer granularity and have learnt to better understand their context? To tease apart these possibilities, we introduce ALERT, a benchmark and suite of analyses for assessing language models' reasoning ability, comparing pre-trained and finetuned models on complex tasks that require reasoning skills to solve. ALERT provides a test bed to assess any language model on fine-grained reasoning skills; it spans over 20 datasets and covers 10 different reasoning skills. We leverage ALERT to further investigate the role of finetuning. With extensive empirical analysis we find that language models learn more reasoning skills, such as textual entailment, abductive reasoning, and analogical reasoning, during the finetuning stage than during pretraining. We also find that when language models are finetuned they tend to overfit to the prompt template, which hurts model robustness and causes generalization problems.
Robustness studies of black-box models are recognized as a necessary task, both for numerical models based on structural equations and for predictive models learned from data. These studies must assess the model's robustness to possible misspecification of its inputs (e.g., covariate shift). The study of black-box models through the prism of uncertainty quantification (UQ) is often based on sensitivity analysis involving a probabilistic structure imposed on the inputs, while ML models are solely constructed from observed data. Our work aims at unifying the UQ and ML interpretability approaches by providing relevant and easy-to-use tools for both paradigms. To provide a generic and understandable framework for robustness studies, we define perturbations of input information relying on quantile constraints and projections with respect to the Wasserstein distance between probability measures, while preserving their dependence structure. We show that this perturbation problem can be solved analytically. Ensuring regularity constraints by means of isotonic polynomial approximations leads to smoother perturbations, which can be more suitable in practice. Numerical experiments on real case studies from the UQ and ML fields highlight the computational feasibility of such studies and provide local and global insights on the robustness of black-box models to input perturbations.
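A concrete building block behind such studies: in one dimension the Wasserstein-optimal coupling between equal-size samples simply sorts them, and a pure translation is the W2-minimal way to shift a distribution's mean. A small sketch of probing a black-box model under such a perturbation (the model and sample here are invented for illustration, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def w2_1d(a, b):
    """Empirical 2-Wasserstein distance between equal-size 1D samples:
    in one dimension the optimal coupling matches sorted order."""
    a, b = np.sort(a), np.sort(b)
    return np.sqrt(np.mean((a - b) ** 2))

# Illustrative black-box model and input sample (not from the paper)
model = lambda x: np.sin(x) + 0.1 * x
x = rng.normal(0.0, 1.0, size=2000)

# Mean-shift perturbation: among all distributions whose mean moves by
# delta, translating the sample is the W2-optimal (minimal-cost) choice.
delta = 0.5
x_pert = x + delta

print(w2_1d(x, x_pert))                        # equals |delta|
print(model(x).mean(), model(x_pert).mean())   # output shift under perturbation
```

The paper's quantile-constrained projections generalize this idea to richer constraints while preserving the inputs' dependence structure.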
Models of acoustic word embeddings (AWEs) learn to map variable-length spoken word segments onto fixed-dimensionality vector representations such that different acoustic exemplars of the same word are projected nearby in the embedding space. In addition to their speech technology applications, AWE models have been shown to predict human performance on a variety of auditory lexical processing tasks. Current AWE models are based on neural networks and trained in a bottom-up approach that integrates acoustic cues to build word representations given an acoustic or symbolic supervision signal. Therefore, these models do not leverage or capture high-level lexical knowledge during the learning process. In this paper, we propose a multi-task learning model that incorporates top-down lexical knowledge into the training procedure of AWEs. Our model learns a mapping between the acoustic input and a lexical representation that encodes high-level information such as word semantics, in addition to bottom-up form-based supervision. We experiment with three languages and demonstrate that incorporating lexical knowledge improves the discriminability of the embedding space and encourages the model to better separate lexical categories.
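The core idea of the multi-task objective above, combining a bottom-up form-based term with a top-down semantic term, can be sketched as follows. The quadratic losses, the weight `alpha`, and optimizing a single embedding vector directly (rather than an encoder network's weights) are all simplifying assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multi-task objective sketch: the AWE should match both a form-based
# target (bottom-up) and a semantic word vector (top-down).  All
# quantities below are illustrative placeholders.
dim = 16
awe = rng.standard_normal(dim)          # acoustic word embedding
form_target = rng.standard_normal(dim)  # e.g. a phonetic/orthographic anchor
sem_target = rng.standard_normal(dim)   # e.g. a word2vec-style vector

def multitask_loss(e, alpha=0.5):
    form = np.sum((e - form_target) ** 2)   # bottom-up term
    sem = np.sum((e - sem_target) ** 2)     # top-down lexical term
    return alpha * form + (1 - alpha) * sem

# Gradient descent on the embedding alone (in practice the encoder
# network's weights would be trained instead).
lr = 0.05
for _ in range(200):
    grad = (awe - form_target) + (awe - sem_target)  # gradient for alpha=0.5
    awe -= lr * grad
print(multitask_loss(awe))
```

With equal weights the optimum is the midpoint of the two targets; varying `alpha` trades off form discriminability against semantic structure, which is the tension the experiments in the paper quantify.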
Numerical validation is at the core of machine learning research, as it allows assessing the actual impact of new methods and confirming the agreement between theory and practice. Yet the rapid development of the field poses several challenges: researchers are confronted with a profusion of methods to compare, limited transparency and consensus on best practices, and tedious re-implementation work. As a result, validation is often very partial, which can lead to wrong conclusions that slow the progress of research. We propose Benchopt, a collaborative framework to automate, reproduce and publish optimization benchmarks in machine learning across programming languages and hardware architectures. Benchopt simplifies benchmarking for the community by providing off-the-shelf tools for running, sharing and extending experiments. To demonstrate its broad usability, we showcase benchmarks on three standard learning tasks: $\ell_2$-regularized logistic regression, the Lasso, and ResNet18 training for image classification. These benchmarks highlight key practical findings that give a more nuanced view of the state of the art for these problems, showing that for practical evaluation the devil is in the details. We hope that Benchopt will foster collaborative work in the community, hence improving the reproducibility of research findings.
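The kind of comparison Benchopt automates can be hand-rolled in a few lines: several solvers race on one well-defined objective and report their final values. The sketch below does this for the $\ell_2$-regularized logistic regression task named in the abstract; note this is NOT Benchopt's API, just a minimal illustration of the benchmark pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal hand-rolled benchmark of the kind Benchopt automates (this is
# NOT Benchopt's API): two solver configurations race on the same
# l2-regularized logistic regression objective.
n, d, lam = 200, 20, 0.1
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

def objective(w):
    """Average logistic loss plus l2 penalty."""
    return np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w

def gradient(w):
    s = -y / (1 + np.exp(y * (X @ w)))
    return X.T @ s / n + lam * w

def solve(step, iters=500):
    """Plain gradient descent with a fixed step size."""
    w = np.zeros(d)
    for _ in range(iters):
        w -= step * gradient(w)
    return objective(w)

results = {f"gd(step={s})": solve(s) for s in (0.1, 1.0)}
print(results)
```

Benchopt's value is in standardizing exactly this loop, plus data loading, stopping criteria, and result publication, so that solver comparisons are reproducible across papers and machines.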
Alchemy is a rich new meta-learning environment, rich enough to contain interesting abstractions yet simple enough to make fine-grained analysis tractable. Additionally, Alchemy provides an optional symbolic interface that enables meta-RL research without a large compute budget. In this work, we take the first steps toward using Symbolic Alchemy to identify design choices that enable the learning of various types of abstraction. Then, using a variety of behavioral and introspective analyses, we investigate how our trained agents use and represent abstract task variables, and find connections of interest to the neuroscience of abstraction. We conclude by discussing the next steps for using meta-RL and Alchemy to better understand the representation of abstract variables in the brain.
Building a small and fast surveillance system model that fits on limited-resource devices is a challenging yet important task. Convolutional neural networks (CNNs) have replaced traditional feature extraction and machine learning models in detection and classification tasks. Various complex large CNN models have been proposed that achieve significant improvements in accuracy, and lightweight CNN models have recently been introduced for real-time tasks. This paper presents a lightweight CNN-based model that can fit on limited edge devices such as the Raspberry Pi. Our proposed model provides better processing time and smaller size with accuracy comparable to existing methods. The model's performance is evaluated on multiple benchmark datasets and compared with existing models in terms of size, average processing time, and F-score. Further enhancements for future research are suggested.